OCPBUGS-51273: Don't crashloop for HAProxy init container #4963
Conversation
Previously we just crashlooped when the HAProxy init container failed, which is a normal, expected condition when HAProxy starts before CoreDNS. This is causing issues in CI because a pod crashing more than 3 times in a row is considered a failure. It usually doesn't take long for the check to pass, but we are hitting an odd timing issue during upgrades: the node is just about to reboot after MCO updates the pod definitions, and the check takes longer than normal because ostree is updating the node at the same time.

Since this is just a case of everything behaving as expected, let's stop failing the pod for an expected situation. This change puts the api-int call in a loop so it simply runs until CoreDNS is ready, and we never trigger error reporting just because of harmless timing issues.
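For illustration, here is a minimal sketch of the kind of wait-and-retry init step described above, assuming a bash entrypoint that probes the internal API endpoint with curl. The URL, the CLUSTER_DOMAIN variable, and the 5-second interval are illustrative placeholders, not the actual MCO template.

```bash
#!/bin/bash
# Illustrative sketch only, not the actual MCO change: keep probing api-int
# until it answers instead of exiting non-zero, so the init container never
# trips CrashLoopBackOff while CoreDNS is still coming up.
# CLUSTER_DOMAIN and the 5-second interval are assumptions for this example.
CLUSTER_DOMAIN="${CLUSTER_DOMAIN:-cluster.example.com}"

until curl -o /dev/null -kLfsS --max-time 5 "https://api-int.${CLUSTER_DOMAIN}:6443/readyz"; do
  echo "api-int not reachable yet (CoreDNS may still be starting); retrying in 5s"
  sleep 5
done
echo "api-int is reachable; starting HAProxy"
```

Because the loop only exits on success, timing skew between HAProxy and CoreDNS no longer surfaces as container restarts or CI failures.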
@cybertron: This pull request references Jira Issue OCPBUGS-51273, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.
Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/retest-required
Not used in hypershift.
/retest-required
@cybertron: The following tests failed:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Test failures have calmed down but could return at any time; it would be great to get this merged!
@cybertron do we want to get this in? I was just checking up on Networking bugs with component-regression labels and ended up here. Looks like @dgoodwin wants it too?
Yes, we still want this, but the hypershift job appears to be perma-failing right now, so I'm holding off on a retest. I assume that is related to the ongoing CI infra outage.
/retest-required
Hypershift looks healthier now.
/lgtm
/assign @dkhater-redhat
/approve
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: cybertron, djoshy, mkowalski. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Merged commit 6d96a78 into openshift:main.
@cybertron: Jira Issue OCPBUGS-51273: All pull requests linked via external trackers have merged. Jira Issue OCPBUGS-51273 has been moved to the MODIFIED state.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
[ART PR BUILD NOTIFIER] Distgit: ose-machine-config-operator
Fix included in accepted release 4.19.0-0.nightly-2025-04-29-095709 |